Classification with non-i.i.d. sampling

Authors

  • Zheng-Chu Guo
  • Lei Shi
Abstract

We study learning algorithms for classification generated by regularization schemes in reproducing kernel Hilbert spaces associated with a general convex loss function and a non-i.i.d. sampling process. An error analysis is carried out, and our main purpose is to provide elaborate capacity-dependent error bounds by applying concentration techniques involving the ℓ²-empirical covering numbers.

Keywords: β-mixing sequence; reproducing kernel Hilbert spaces; ℓ²-empirical covering number; capacity-dependent error bounds
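The abstract refers to regularization schemes in a reproducing kernel Hilbert space with a general convex loss. A minimal sketch of such a scheme, in standard notation that is assumed here rather than quoted from the full text (a sample z = {(x_i, y_i)}_{i=1}^m drawn from the non-i.i.d. process, an RKHS H_K with norm ||·||_K, a convex loss φ applied to the margin y·f(x), and a regularization parameter λ > 0), takes the output classifier to be sgn(f_z) with

    \[
      f_{z} \;=\; \arg\min_{f \in \mathcal{H}_K} \Bigl\{ \frac{1}{m} \sum_{i=1}^{m} \phi\bigl(y_i f(x_i)\bigr) \;+\; \lambda \|f\|_K^2 \Bigr\}.
    \]

Capacity-dependent analyses of this kind typically bound the excess misclassification error of sgn(f_z) in terms of λ, the ℓ²-empirical covering numbers of the unit ball of H_K, and the β-mixing coefficients of the sampling process.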

Similar articles

Extremal Large Deviations in Controlled i.i.d. Processes with Applications to Hypothesis

We consider a controlled i.i.d. process, where several i.i.d. sources are sampled sequentially. At each time instant, a controller determines from which source to obtain the next sample. Any causal sampling policy, possibly history-dependent, may be employed. The purpose is to characterize the extremal large deviations of the sample mean, namely to obtain asymptotic rate bounds (similar to and ...
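As a concrete illustration of the setup described above (not taken from the cited paper), the sketch below simulates a controlled i.i.d. process: two illustrative sources, a simple history-dependent sampling policy, and the resulting sample mean whose deviations are the object of study. The sources, the policy, and the horizon are all assumptions made for the example.

    import numpy as np

    rng = np.random.default_rng(0)

    # Two illustrative i.i.d. sources; the general setting allows several arbitrary sources.
    sources = [lambda: rng.normal(0.0, 1.0), lambda: rng.normal(1.0, 1.0)]

    def policy(history):
        # A simple causal, history-dependent rule (assumed for illustration):
        # sample next from the source with the larger running empirical mean.
        means = [np.mean(h) if h else 0.0 for h in history]
        return int(np.argmax(means))

    n = 1000
    history = [[], []]   # samples obtained from each source so far
    samples = []         # the controlled process X_1, ..., X_n
    for _ in range(n):
        k = policy(history)   # the controller chooses which source to sample
        x = sources[k]()
        history[k].append(x)
        samples.append(x)

    # The sample mean whose extremal large deviations are characterized.
    print("sample mean:", np.mean(samples))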

Nonparametric Regression (and Classification), Statistical Machine Learning, Spring 2017

• Note for i.i.d. samples (x_i, y_i) ∈ R × R, i = 1, ..., n, we can always write y_i = f_0(x_i) + ε_i, i = 1, ..., n, where ε_i, i = 1, ..., n, are i.i.d. random errors with mean zero. Therefore we can think about the sampling distribution as follows: (x_i, ε_i), i = 1, ..., n, are i.i.d. draws from some common joint distribution, where E(ε_i) = 0, and y_i, i = 1, ..., n, are generated from the ...
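A short simulation of the model just described might look like the following; the choice of f_0, the design distribution, and the noise level are illustrative assumptions, not part of the cited notes.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    f0 = np.sin                          # an illustrative true regression function
    x = rng.uniform(0.0, 2 * np.pi, n)   # i.i.d. design points x_1, ..., x_n
    eps = rng.normal(0.0, 0.3, n)        # i.i.d. mean-zero errors eps_1, ..., eps_n
    y = f0(x) + eps                      # y_i = f0(x_i) + eps_i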

A Resampling Technique for Relational Data Graphs

Resampling (a.k.a. bootstrapping) is a computationally intensive statistical technique for estimating the sampling distribution of an estimator. Resampling is used in many machine learning algorithms, including ensemble methods, active learning, and feature selection. Resampling techniques generate pseudo-samples from an underlying population by sampling with replacement from a single sample data...
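A minimal sketch of the plain (non-relational) bootstrap described above: pseudo-samples are drawn with replacement from a single sample, and the spread of the recomputed estimator approximates its sampling distribution. The data-generating choice and the estimator (the sample mean) are assumptions for the example; the graph-specific resampling of the cited paper is not shown.

    import numpy as np

    rng = np.random.default_rng(2)
    data = rng.exponential(scale=2.0, size=100)   # a single observed sample (assumed)

    B = 1000
    boot_means = np.empty(B)
    for b in range(B):
        pseudo = rng.choice(data, size=data.size, replace=True)  # resample with replacement
        boot_means[b] = pseudo.mean()                            # recompute the estimator

    # Bootstrap estimate of the standard error of the sample mean.
    print("bootstrap SE:", boot_means.std(ddof=1))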

Restricted isometry property of matrices with independent columns and neighborly polytopes by random sampling

This paper considers compressed sensing matrices and neighborliness of a centrally symmetric convex polytope generated by vectors ±X_1, ..., ±X_N ∈ R^n (N ≥ n). We introduce a class of random sampling matrices and show that they satisfy a restricted isometry property (RIP) with overwhelming probability. In particular, we prove that matrices with i.i.d. centered and variance 1 entries that satis...
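The following sketch generates a random sensing matrix with i.i.d. centered, variance-1 entries (rescaled by 1/sqrt(n)) and checks, on a handful of random k-sparse vectors, how nearly it preserves their norms. This is only an empirical probe under assumed dimensions; certifying a restricted isometry constant would require controlling all k-sparse supports, which the code does not do.

    import numpy as np

    rng = np.random.default_rng(3)
    n, N, k = 64, 256, 5                            # assumed dimensions and sparsity
    A = rng.standard_normal((n, N)) / np.sqrt(n)    # i.i.d. centered, variance-1 entries, rescaled

    ratios = []
    for _ in range(200):
        support = rng.choice(N, size=k, replace=False)
        x = np.zeros(N)
        x[support] = rng.standard_normal(k)         # a random k-sparse vector
        ratios.append(np.linalg.norm(A @ x) / np.linalg.norm(x))

    # An RIP bound asks for 1 - delta <= ratio**2 <= 1 + delta uniformly over all k-sparse x.
    print("min/max norm ratio:", min(ratios), max(ratios))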

Some Further Developments for Stick-breaking Priors: Finite and Infinite Clustering and Classification

SUMMARY. The class of stick-breaking priors and their extensions are considered in classification and clustering problems in which the complexity, the number of possible models or clusters, can be either bounded or unbounded. A conjugacy property for the extended stick-breaking prior is established which allows for informative characterizations of the priors under i.i.d. sampling, and which fu...
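For context, here is a minimal sketch of the classical stick-breaking construction (Sethuraman's representation of Dirichlet-process weights), which the extended priors in the cited paper generalize; the concentration parameter and the truncation level are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(4)
    alpha, K = 1.0, 50                   # concentration parameter and truncation level (assumed)

    v = rng.beta(1.0, alpha, size=K)     # stick-breaking proportions V_k ~ Beta(1, alpha)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    w = v * remaining                    # weights w_k = V_k * prod_{j<k} (1 - V_j)

    print("sum of truncated weights:", w.sum())   # tends to 1 as K grows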


Journal:
  • Mathematical and Computer Modelling

Volume 54, Issue -

Pages -

Publication year: 2011